The security of artificial intelligence (AI) is an important research area for building safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, the China Industrial Control Systems Cyber Emergency Response Team, the Institute for Artificial Intelligence at Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition comprises three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the rules of these three tracks and the solutions of the top-ranking teams in each track.
Large-scale pre-trained language models (PLMs) bring new opportunities for challenging problems, especially those that require high-level intelligence, such as math word problems (MWPs). However, directly applying existing PLMs to MWPs can fail because the generation process lacks sufficient supervision and thus lacks the fast adaptivity of humans. We note that human reasoning follows a dual-process framework consisting of an immediate reaction system (system 1) and a delicate reasoning system (system 2), where the overall reasoning is determined by their interaction. This inspires us to develop a cooperative reasoning-induced PLM for solving MWPs, called Cooperative Reasoning (CoRe), which yields a human-like reasoning architecture with system 1 as the generator and system 2 as the verifier. In our approach, the generator is responsible for generating reasoning paths, and the verifiers supervise the evaluation in order to obtain reliable feedback for the generator. We evaluate our CoRe framework on several mathematical reasoning datasets and achieve solid improvements over state-of-the-art methods, up to a 9.8% increase over the best baselines.
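The system-1/system-2 cooperation described above can be sketched as a generate-then-verify loop. The toy arithmetic task and the `generator`, `verifier`, and `cooperative_reason` names below are illustrative stand-ins for the paper's PLM-based generator and learned verifiers, not the actual implementation:

```python
def generator(question, offsets=(-2, -1, 0, 1, 2)):
    """System 1 (toy): propose candidate reasoning paths for a two-number
    addition problem; a real PLM would sample token sequences instead."""
    a, b = question
    return [{"steps": [f"{a}+{b}={a + b + off}"], "answer": a + b + off}
            for off in offsets]

def verifier(question, path):
    """System 2 (toy): score a path by symbolically re-checking its answer;
    the paper instead uses learned verifiers for reliable feedback."""
    a, b = question
    return 1.0 if path["answer"] == a + b else 0.0

def cooperative_reason(question):
    """Generate paths with system 1, rank them with system 2, keep the best."""
    paths = generator(question)
    best = max(paths, key=lambda p: verifier(question, p))
    return best["answer"]
```

In the real framework the verifier's scores also flow back as a training signal for the generator; here they only rank the sampled paths.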
Nowadays, foundation models have become one of the fundamental infrastructures of artificial intelligence, paving the way towards general intelligence. However, reality presents two urgent challenges: existing foundation models are dominated by the English-language community, and users are often given limited resources and thus cannot always make use of foundation models. To support the development of the Chinese-language community, we introduce an open-source project called Fengshenbang, led by the research center for Cognitive Computing and Natural Language (CCNL). Our project has comprehensive capabilities, including large pre-trained models, user-friendly APIs, benchmarks, datasets, and more. We wrap all of these into three sub-projects: the Fengshenbang Models, the Fengshen Framework, and the Fengshen Benchmark. The open-source roadmap of Fengshenbang aims to re-evaluate the open-source community of Chinese pre-trained large-scale models and to promote the development of the entire Chinese large-model community. We also aim to build a user-centered open-source ecosystem that allows individuals to access the models they need, matched to their computing resources. Furthermore, we invite companies, universities, and research institutions to collaborate with us in building the ecosystem of large-scale open-source models. We hope this project will become the foundation of Chinese cognitive intelligence.
The recently proposed DEtection TRansformer (DETR) has established a fully end-to-end paradigm for object detection. However, DETR suffers from slow training convergence, which hinders its applicability to various detection tasks. We observe that DETR's slow convergence is largely attributable to the difficulty of matching object queries to relevant regions, owing to the semantic misalignment between object queries and encoded image features. Motivated by this observation, we design Semantic-Aligned-Matching DETR++ (SAM-DETR++) to accelerate DETR's convergence and improve detection performance. The core of SAM-DETR++ is a plug-and-play module that projects object queries and encoded image features into the same feature embedding space, where each object query can easily be matched to relevant regions with similar semantics. In addition, SAM-DETR++ searches for multiple representative keypoints and exploits their features for semantic-aligned matching with enhanced representation capacity. Furthermore, on the basis of the designed semantic-aligned matching, SAM-DETR++ can effectively fuse multi-scale features in a coarse-to-fine manner. Extensive experiments show that the proposed SAM-DETR++ achieves superior convergence speed and competitive detection accuracy. Moreover, as a plug-and-play method, SAM-DETR++ can complement existing DETR convergence solutions for even better performance, achieving 44.8% AP with only 12 training epochs and 49.1% AP with 50 training epochs on COCO val2017 with ResNet-50. Code is available at https://github.com/zhanggongjie/sam-detr.
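The core idea — projecting queries and image features into a shared embedding space and matching by similarity — can be illustrated with a minimal sketch. The `semantic_aligned_match` name, the single linear map `W`, and cosine similarity as the matching score are simplifying assumptions, not the paper's actual module:

```python
import numpy as np

def semantic_aligned_match(queries, feats, W):
    """Project object queries into the image-feature embedding space with a
    shared linear map W, then match each query to its most similar region
    by cosine similarity.

    queries: (n_queries, d) object-query embeddings
    feats:   (n_regions, d) encoded image features (already in target space)
    W:       (d, d) projection aligning queries with the feature space
    Returns the index of the best-matching region for each query.
    """
    q = queries @ W
    q = q / np.linalg.norm(q, axis=1, keepdims=True)   # unit-normalize queries
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = q @ f.T                                      # (n_queries, n_regions)
    return sim.argmax(axis=1)
```

With aligned semantics, the argmax rows of the similarity matrix are exactly the regions each query attends to, which is what eases the matching during training.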
Domain adaptive panoptic segmentation aims to mitigate the data annotation challenge by leveraging off-the-shelf annotated data from one or more related source domains. However, existing studies employ two separate networks for instance segmentation and semantic segmentation, leading to a large number of network parameters as well as complicated and computationally intensive training and inference processes. We design UniDAPS, a Unified Domain Adaptive Panoptic Segmentation network that is simple yet capable of achieving domain adaptive instance segmentation and semantic segmentation simultaneously within a single network. UniDAPS introduces Hierarchical Mask Calibration (HMC), which rectifies the predicted pseudo masks, pseudo superpixels, and pseudo pixels, and re-trains the network via an on-the-fly online self-training process. It has three unique features: 1) it enables unified domain adaptive panoptic adaptation; 2) it mitigates false predictions and effectively improves domain adaptive panoptic segmentation; 3) it is end-to-end trainable, with fewer parameters and a simpler training and inference pipeline. Extensive experiments over multiple public benchmarks show that UniDAPS achieves superior domain adaptive panoptic segmentation compared with the state-of-the-art.
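A loose sketch of the calibration idea behind HMC: correct low-confidence pseudo-labels with a confidence-weighted vote inside each superpixel. The function name, the voting rule, and the fixed threshold are hypothetical simplifications of the actual hierarchical mask/superpixel/pixel scheme:

```python
import numpy as np

def calibrate_pseudo_mask(pseudo_mask, superpixels, conf, thresh=0.5):
    """Rectify a predicted pseudo-label map for self-training (toy sketch).

    pseudo_mask: (H, W) integer pseudo-labels
    superpixels: (H, W) superpixel ids grouping coherent pixels
    conf:        (H, W) per-pixel prediction confidence in [0, 1]
    Within each superpixel, take a confidence-weighted vote over labels,
    then overwrite only the low-confidence pixels with the winning label.
    """
    out = pseudo_mask.copy()
    for sp in np.unique(superpixels):
        idx = superpixels == sp
        vote = {}
        for label, w in zip(pseudo_mask[idx], conf[idx]):
            vote[int(label)] = vote.get(int(label), 0.0) + float(w)
        majority = max(vote, key=vote.get)
        low = idx & (conf < thresh)          # pixel-level correction
        out[low] = majority
    return out
```

The calibrated map would then serve as the target in the next round of online self-training.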
Natural language understanding (NLU) suffers from diverse output modalities even though pre-trained language models share a semantic encoder. In this paper, we propose UBERT, a unified bidirectional language understanding model based on the BERT framework, which can universally model the training objectives of different NLU tasks through a biaffine network. Specifically, UBERT encodes prior knowledge from various aspects and uniformly constructs learning representations across multiple NLU tasks, which is conducive to enhancing the ability to capture common semantic understanding. By using the biaffine network to model pairs of start and end positions in the original text, various classification and extraction structures can be converted into a universal span-decoding approach. Experiments show that UBERT achieves state-of-the-art performance on 7 NLU tasks and 14 datasets under zero-shot settings, and unifies a wide range of information extraction and linguistic reasoning tasks.
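The span-decoding idea can be sketched as scoring every (start, end) token pair with a biaffine form. The `biaffine_span_scores` and `decode_best_span` names are illustrative, and the toy one-hot embeddings stand in for BERT hidden states:

```python
import numpy as np

def biaffine_span_scores(h_start, h_end, U, b=0.0):
    """Biaffine scorer: s[i, j] = h_start[i]^T U h_end[j] + b, giving a score
    for the span that starts at token i and ends at token j.

    h_start, h_end: (n, d) start/end token representations
    U:              (d, d) biaffine weight matrix
    """
    return h_start @ U @ h_end.T + b

def decode_best_span(scores):
    """Pick the highest-scoring valid span (start <= end)."""
    n = scores.shape[0]
    valid = np.triu(np.ones((n, n), dtype=bool))   # upper triangle only
    masked = np.where(valid, scores, -np.inf)
    i, j = np.unravel_index(masked.argmax(), masked.shape)
    return int(i), int(j)
```

Classification tasks fit the same head by treating the predicted span (and its label) as the output, which is what makes the formulation task-universal.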
For practical deep neural network design on mobile devices, it is essential to consider the constraints imposed by the available computational resources and the inference latency in various applications. Among network acceleration approaches, pruning is a widely adopted practice for balancing computational resource consumption and accuracy: unimportant connections can be removed either channel-wise or randomly, with minimal impact on model accuracy. Channel pruning immediately yields a significant latency reduction, while random weight pruning is more flexible for balancing latency and accuracy. In this paper, we introduce a unified framework with Joint Channel pruning and Weight pruning (JCW), which achieves a better Pareto frontier between latency and accuracy than previous model compression approaches. To fully optimize the trade-off between latency and accuracy, we develop a tailored multi-objective evolutionary algorithm within the JCW framework, which enables a single search to obtain optimal candidate architectures for various deployment requirements. Extensive experiments demonstrate that JCW achieves a better trade-off between latency and accuracy than various state-of-the-art pruning methods on the ImageNet classification dataset. Our code is available at https://github.com/jcw-anonymous/jcw.
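The selection step of such a multi-objective search keeps only Pareto-optimal candidates, so one search yields architectures for many deployment budgets. A minimal sketch, in which each candidate is a hypothetical (latency, error) pair and the full evolutionary loop (mutation, crossover, evaluation) is omitted:

```python
def pareto_front(candidates):
    """Return the non-dominated candidates in (latency, error) space.

    A candidate is dominated if some other candidate is no worse in both
    objectives and strictly better in at least one. The surviving set is
    the Pareto frontier from which per-device architectures are picked.
    """
    front = []
    for i, (lat_i, err_i) in enumerate(candidates):
        dominated = any(
            lat_j <= lat_i and err_j <= err_i and (lat_j < lat_i or err_j < err_i)
            for j, (lat_j, err_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((lat_i, err_i))
    return front
```

A latency-constrained deployment would then simply pick the frontier point with the lowest error under its latency budget.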
Training effective generative adversarial networks (GANs) requires large amounts of training data, without which the trained models are usually sub-optimal due to discriminator over-fitting. Several prior studies address this issue by expanding the distribution of the limited training data via massive, hand-crafted data augmentation. We approach data-limited image generation from a very different perspective. Specifically, we design GenCo, a generative co-training network that mitigates the discriminator over-fitting issue by introducing multiple complementary discriminators that provide diverse supervision from multiple distinct views during training. We instantiate the idea of GenCo in two ways. The first is Weight-Discrepancy Co-training (WeCo), which co-trains multiple distinct discriminators by diversifying their parameters. The second is Data-Discrepancy Co-training (DaCo), which achieves co-training by feeding the discriminators different views of the input images (e.g., different frequency components of the input images). Extensive experiments over multiple benchmarks show that GenCo achieves superior generation with limited training data. In addition, GenCo complements data augmentation methods, with consistent and clear performance gains when combined.
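The DaCo variant feeds each discriminator a different view of the same input, for instance distinct frequency components. A minimal sketch using an FFT radial split; the `cutoff` parameter and this exact decomposition are assumptions for illustration, not the paper's design:

```python
import numpy as np

def frequency_views(img, cutoff=0.25):
    """Split a grayscale image into low- and high-frequency views with a
    radial mask in the 2-D Fourier domain, so each co-trained discriminator
    can see a distinct view of the same input.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    low_mask = r <= cutoff                       # keep frequencies near DC
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)).real
    return low, high
```

Because the two masks partition the spectrum, the views sum back to the original image; each discriminator nonetheless receives genuinely different information, which is the source of the diverse supervision.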
Leveraging advances in natural language processing, most recent scene text recognizers adopt an encoder-decoder architecture in which text images are first converted to representative features and then to a sequence of characters via sequential decoding. However, scene text images suffer from rich noise from different sources, such as complex backgrounds and geometric distortions, which often confuses the decoder and leads to incorrect alignment of visual features at noisy decoding time steps. This paper presents I2C2W, a novel scene text recognition technique that is tolerant to geometric and photometric degradation by decomposing scene text recognition into two inter-connected tasks. The first task focuses on image-to-character (I2C) mapping, which detects a set of character candidates from images based on different alignments of visual features in a non-sequential way. The second task tackles character-to-word (C2W) mapping, which recognizes scene text by decoding words from the detected character candidates. The direct learning from character semantics (instead of noisy image features) effectively corrects falsely detected character candidates, which greatly improves the final text recognition accuracy. Extensive experiments over nine public datasets show that the proposed I2C2W outperforms the state-of-the-art by large margins on challenging scene text datasets with various curvature and perspective distortions. It also achieves very competitive recognition performance on multiple normal scene text datasets.
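The two-task decomposition can be sketched with toy `i2c` and `c2w` functions (hypothetical names): I2C proposes per-position character candidates independently, and C2W picks the word that best explains them, so word-level decoding can override falsely detected characters. A small fixed lexicon stands in for the learned character-semantics decoder:

```python
def i2c(char_probs, thresh=0.3):
    """I2C (toy): keep every character candidate above a confidence
    threshold at each position; positions are scored independently,
    i.e. non-sequentially."""
    return [[(c, p) for c, p in pos.items() if p >= thresh]
            for pos in char_probs]

def c2w(candidates, lexicon):
    """C2W (toy): decode the word whose characters best match the
    detected candidates, summing candidate confidences per position."""
    def score(word):
        if len(word) != len(candidates):
            return float("-inf")
        return sum(dict(cands).get(ch, 0.0)
                   for ch, cands in zip(word, candidates))
    return max(lexicon, key=score)
```

Even if a position's top candidate is wrong (e.g. 'o' narrowly beating 'a'), the word-level score can still recover the correct reading, which mirrors the correction effect described above.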
The image recapture attack is an effective image manipulation method for erasing certain forensic traces, and when targeting personal document images it poses a great threat to the security of e-commerce and other web applications. Considering that current learning-based methods suffer from a serious overfitting problem, in this paper we propose a novel two-branch deep neural network that mines better-generalized recapture artifacts with a designed frequency filter bank and a multi-scale cross-attention fusion module. In extensive experiments, we show that our method achieves better generalization capability than state-of-the-art techniques across different scenarios.
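One way to read the "frequency filter bank" is as a band-pass decomposition whose outputs feed the network branches, since recapture artifacts such as moiré patterns concentrate in particular bands. A minimal sketch; the band edges and radial-mask construction are assumptions, not the paper's design:

```python
import numpy as np

def filter_bank_features(img, bands=((0.0, 0.1), (0.1, 0.3), (0.3, 0.8))):
    """Decompose a grayscale image into frequency bands with radial masks
    in the 2-D Fourier domain; each band isolates structure at a different
    spatial scale (toy stand-in for a learned/designed filter bank)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    outs = []
    for lo, hi in bands:
        mask = (r >= lo) & (r < hi)
        outs.append(np.fft.ifft2(np.fft.ifftshift(f * mask)).real)
    return outs
```

With band edges that cover the whole spectrum (the maximum normalized radius is about 0.707, below 0.8 here), the bands sum back to the input, so no information is discarded before the branches.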